Systems based on high-performance deep neural networks (DNNs) are in high demand in edge environments. Because of their high computational complexity, deploying DNNs on edge devices with strictly limited computational resources is challenging. In this paper, we derive a compact DNN model, called dsODENet, by combining recently proposed parameter-reduction techniques: Neural ODE (Ordinary Differential Equation) and DSC (Depthwise Separable Convolution). Neural ODE exploits a similarity between ResNet and ODEs and shares most of its weight parameters across multiple layers, which greatly reduces memory consumption. We apply dsODENet to domain adaptation as a practical use case with image classification datasets. We also propose a resource-efficient FPGA-based design for dsODENet, in which all parameters and feature maps, except those of the pre- and post-processing layers, can be mapped onto on-chip memories. The design is implemented on a Xilinx ZCU104 board and evaluated in terms of domain adaptation accuracy, inference speed, FPGA resource utilization, and speedup over a software counterpart. The results show that dsODENet achieves comparable or slightly better domain adaptation accuracy than our baseline Neural ODE implementation, while the total parameter size excluding the pre- and post-processing layers is reduced by 54.2% to 79.8%. Our FPGA implementation accelerates inference by 23.8 times.
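To illustrate where the parameter savings come from, here is a minimal sketch, assuming a PyTorch-style module; names such as DsOdeBlock and num_steps are hypothetical and do not reflect the paper's actual implementation. It shows one ODE-style block built from a depthwise separable convolution whose single set of weights is reused across several Euler steps.

```python
# A minimal sketch (PyTorch) of an ODE-style block with depthwise separable
# convolutions. DsOdeBlock, num_steps, and step_size are illustrative assumptions.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Depthwise conv: one filter per channel; pointwise conv: 1x1 channel mixing.
        self.depthwise = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class DsOdeBlock(nn.Module):
    """One set of weights reused for several Euler steps: x <- x + h * f(x)."""
    def __init__(self, channels, num_steps=4, step_size=0.25):
        super().__init__()
        self.f = nn.Sequential(
            nn.BatchNorm2d(channels), nn.ReLU(), DepthwiseSeparableConv(channels)
        )
        self.num_steps = num_steps
        self.step_size = step_size

    def forward(self, x):
        for _ in range(self.num_steps):  # weight sharing across "layers"
            x = x + self.step_size * self.f(x)
        return x

x = torch.randn(1, 64, 32, 32)
print(DsOdeBlock(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```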
LiDAR (Light Detection and Ranging) SLAM (Simultaneous Localization and Mapping) serves as a basis for indoor cleaning, navigation, and many other useful applications in both industry and household. From a series of LiDAR scans, it constructs an accurate, globally consistent model of the environment and estimates the robot's position within it. SLAM is inherently computationally intensive; implementing a fast and reliable SLAM system on mobile robots with limited processing power is a challenging problem. To overcome this hurdle, in this paper we propose a universal, low-power, and resource-efficient accelerator design targeting resource-constrained FPGAs. Since scan matching lies at the core of SLAM, the proposed accelerator includes dedicated scan-matching cores on the programmable logic part and provides a software interface for ease of use. Our accelerator can be integrated into various SLAM methods, including ROS (Robot Operating System)-based ones, and users can switch to a different method without modifying and re-synthesizing the logic part. We integrate the accelerator into three widely used methods, i.e., scan-matching, particle-filter, and graph-based SLAM, and evaluate the designs in terms of resource utilization, speed, and quality of the output results using real-world datasets. Experimental results on a Pynq-Z2 board show that our design accelerates the scan-matching and loop-closure detection tasks by up to 14.84x and 18.92x, yielding 4.67x, 4.00x, and 4.06x overall performance improvements in the above methods, respectively. Our design enables real-time performance while consuming only 2.4 W and maintaining accuracy comparable to the software counterparts and even state-of-the-art methods.
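For intuition about the scan-matching step that such an accelerator offloads, the following is a rough sketch of a brute-force correlative search over small pose offsets against an occupancy grid; the grid resolution, search window, and scoring rule are assumptions for exposition, not the accelerator's actual algorithm.

```python
# Minimal sketch of correlative scan matching on an occupancy grid (NumPy).
# The scoring function and search window are illustrative assumptions only.
import numpy as np

def score(grid, scan_xy, pose, resolution=0.05):
    """Count how many scan points land on occupied cells under a candidate pose."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    pts = scan_xy @ np.array([[c, -s], [s, c]]).T + np.array([x, y])
    idx = np.round(pts / resolution).astype(int)
    valid = (idx[:, 0] >= 0) & (idx[:, 0] < grid.shape[1]) & \
            (idx[:, 1] >= 0) & (idx[:, 1] < grid.shape[0])
    return grid[idx[valid, 1], idx[valid, 0]].sum()

def match(grid, scan_xy, init_pose, dxy=0.05, dth=0.01, steps=5):
    """Brute-force search around init_pose for the best-scoring pose."""
    best, best_score = init_pose, -np.inf
    for ix in range(-steps, steps + 1):
        for iy in range(-steps, steps + 1):
            for it in range(-steps, steps + 1):
                cand = (init_pose[0] + ix * dxy,
                        init_pose[1] + iy * dxy,
                        init_pose[2] + it * dth)
                sc = score(grid, scan_xy, cand)
                if sc > best_score:
                    best, best_score = cand, sc
    return best
```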
Pre-trained language models, despite their rapid advancements powered by scale, still fall short of robust commonsense capabilities. And yet, scale appears to be the winning recipe; after all, the largest models seem to have acquired the largest amount of commonsense capabilities. Or is it? In this paper, we investigate the possibility of a seemingly impossible match: can smaller language models with dismal commonsense capabilities (i.e., GPT-2), ever win over models that are orders of magnitude larger and better (i.e., GPT-3), if the smaller models are powered with novel commonsense distillation algorithms? The key intellectual question we ask here is whether it is possible, if at all, to design a learning algorithm that does not benefit from scale, yet leads to a competitive level of commonsense acquisition. In this work, we study the generative models of commonsense knowledge, focusing on the task of generating generics, statements of commonsense facts about everyday concepts, e.g., birds can fly. We introduce a novel commonsense distillation framework, I2D2, that loosely follows the Symbolic Knowledge Distillation of West et al. but breaks the dependence on the extreme-scale models as the teacher model by two innovations: (1) the novel adaptation of NeuroLogic Decoding to enhance the generation quality of the weak, off-the-shelf language models, and (2) self-imitation learning to iteratively learn from the model's own enhanced commonsense acquisition capabilities. Empirical results suggest that scale is not the only way, as novel algorithms can be a promising alternative. Moreover, our study leads to a new corpus of generics, Gen-A-Tomic, that is the largest and of the highest quality available to date.
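The self-imitation loop can be pictured with a schematic sketch, assuming placeholder functions: constrained_generate(), critic_accepts(), and finetune() stand in for constrained decoding, the acceptability critic, and the fine-tuning step, and are not the I2D2 implementation.

```python
# Schematic sketch of iterative self-imitation: generate generics with a small model,
# keep only outputs a critic accepts, then fine-tune the model on its own accepted
# outputs. The helper functions below are hypothetical placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def constrained_generate(model, tokenizer, prompt):
    # Placeholder for constrained (e.g., NeuroLogic-style) decoding.
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=20, do_sample=True, top_p=0.9)
    return tokenizer.decode(out[0], skip_special_tokens=True)

def critic_accepts(text):
    # Placeholder for a trained acceptability critic.
    return len(text.split()) > 3

def finetune(model, examples):
    # Placeholder for a standard causal-LM fine-tuning pass on accepted generics.
    pass

concepts = ["birds", "bicycles", "umbrellas"]
for iteration in range(2):                       # a few self-imitation rounds
    accepted = []
    for concept in concepts:
        text = constrained_generate(model, tokenizer, f"Generic about {concept}:")
        if critic_accepts(text):
            accepted.append(text)
    finetune(model, accepted)                    # learn from the model's own outputs
```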
We consider task allocation for multi-object transport using a multi-robot system, in which each robot selects one object among multiple objects with different and unknown weights. The existing centralized methods assume the number of robots and tasks to be fixed, which is inapplicable to scenarios that differ from the learning environment. Meanwhile, the existing distributed methods limit the minimum number of robots and tasks to a constant value, making them applicable to various numbers of robots and tasks. However, they cannot transport an object whose weight exceeds the load capacity of robots observing the object. To make it applicable to various numbers of robots and objects with different and unknown weights, we propose a framework using multi-agent reinforcement learning for task allocation. First, we introduce a structured policy model consisting of 1) predesigned dynamic task priorities with global communication and 2) a neural network-based distributed policy model that determines the timing for coordination. The distributed policy builds consensus on the high-priority object under local observations and selects cooperative or independent actions. Then, the policy is optimized by multi-agent reinforcement learning through trial and error. This structured policy of local learning and global communication makes our framework applicable to various numbers of robots and objects with different and unknown weights, as demonstrated by numerical simulations.
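A minimal sketch of the structured policy idea follows, assuming illustrative priority rules, features, and thresholds: globally shared task priorities pick a consensus target, and a small local network (normally trained by multi-agent RL) decides whether to cooperate on it or act independently.

```python
# Sketch of a structured policy: predesigned global priorities + a tiny local policy
# that decides the timing of coordination. All rules and sizes are assumptions.
import numpy as np

def task_priorities(remaining_weights, num_observing_robots):
    """Predesigned rule: heavier objects observed by fewer robots get higher priority."""
    return remaining_weights / (num_observing_robots + 1e-6)

def local_policy(obs, w1, w2):
    """Tiny neural policy returning P(coordinate) from a local observation."""
    h = np.tanh(obs @ w1)
    return 1.0 / (1.0 + np.exp(-(h @ w2)))

rng = np.random.default_rng(0)
weights = np.array([3.0, 1.0, 2.0])         # unknown in practice; estimated online
observers = np.array([1, 2, 1])
priorities = task_priorities(weights, observers)
target = int(np.argmax(priorities))          # consensus on the high-priority object

obs = rng.normal(size=4)                     # one robot's local observation
w1, w2 = rng.normal(size=(4, 8)), rng.normal(size=8)
if local_policy(obs, w1, w2) > 0.5:
    action = f"cooperate on object {target}"
else:
    action = "transport an independently chosen object"
print(action)
```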
Artificial life is a research field studying what processes and properties define life, based on a multidisciplinary approach spanning the physical, natural and computational sciences. Artificial life aims to foster a comprehensive study of life beyond "life as we know it" and towards "life as it could be", with theoretical, synthetic and empirical models of the fundamental properties of living systems. While still a relatively young field, artificial life has flourished as an environment for researchers with different backgrounds, welcoming ideas and contributions from a wide range of subjects. Hybrid Life is an attempt to bring attention to some of the most recent developments within the artificial life community, rooted in more traditional artificial life studies but looking at new challenges emerging from interactions with other fields. In particular, Hybrid Life focuses on three complementary themes: 1) theories of systems and agents, 2) hybrid augmentation, with augmented architectures combining living and artificial systems, and 3) hybrid interactions among artificial and biological systems. After discussing some of the major sources of inspiration for these themes, we will focus on an overview of the works that appeared in Hybrid Life special sessions, hosted by the annual Artificial Life Conference between 2018 and 2022.
Analyzing defenses in team sports is generally challenging because of the limited event data. Researchers have previously proposed methods to evaluate football team defense by predicting the events of ball gain and being attacked using the locations of all players and the ball. However, they did not consider the importance of the events, assumed perfect observation of all 22 players, and did not fully investigate the influence of diversity (e.g., nationality and sex). Here, we propose a generalized valuation method for defensive teams by score-scaling the predicted probabilities of the events. Using the open-source location data of all players in broadcast video frames from football games of the men's Euro 2020 and the women's Euro 2022, we investigated the effect of the number of players on the prediction and validated our approach by analyzing the games. The results show that for predicting being attacked, scoring, and conceding, information on all players was not necessary, whereas predicting ball gain required information on three to four offensive and defensive players. Through game analyses, we explained the excellent defense of the finalist teams in Euro 2020. Our approach might be applicable to location data from broadcast video frames in football games.
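As a purely illustrative sketch of score-scaling, the snippet below combines model-predicted event probabilities into a single defensive value; the weights and the combination rule are assumptions for exposition and not the paper's exact formulation.

```python
# Illustrative score-scaled defensive value from predicted event probabilities.
# The combination rule below is an assumption, not the paper's formula.
def defensive_value(p_ball_gain, p_attacked, p_concede_if_attacked, p_score_if_gain):
    """Positive contribution from likely ball gains, negative from likely concessions."""
    return p_ball_gain * p_score_if_gain - p_attacked * p_concede_if_attacked

# Example: one defensive situation evaluated from model-predicted probabilities.
print(defensive_value(p_ball_gain=0.30, p_attacked=0.55,
                      p_concede_if_attacked=0.08, p_score_if_gain=0.05))
```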
There is a long history of efforts to explore musical elements with the entities and spaces around us, such as Musique Concrète and ambient music. In the context of computer music and digital art, interactive experiences that focus on surrounding objects and physical spaces have also been designed. In recent years, with the development and popularization of devices, more and more works have been designed in extended reality to create such musical experiences. In this paper, we describe MR4MR, a sound installation work that allows users to experience melodies produced from interactions with their surrounding space in the context of mixed reality (MR). Using a HoloLens, users can bump virtual objects against real objects in their surroundings. Then, by following the sounds emitted by the objects and using a music generation machine learning model to apply random variations that gradually change the melody, users can feel their ambient melody being "reincarnated".
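The "gradually changing melody" idea can be conveyed with a toy sketch in which each collision event triggers a small random mutation of the current melody; this only illustrates gradual mutation, whereas MR4MR itself uses a music-generation machine learning model, which is not shown here.

```python
# Toy sketch: a collision event mutates the current melody by a small random amount.
import random

def mutate(melody, max_changes=1, step=2):
    """Randomly shift a few pitches by at most `step` semitones."""
    new = list(melody)
    for _ in range(max_changes):
        i = random.randrange(len(new))
        new[i] = min(108, max(21, new[i] + random.randint(-step, step)))
    return new

melody = [60, 62, 64, 67, 69]            # seed melody as MIDI pitches
for collision_event in range(5):         # e.g., a virtual object hits a real object
    melody = mutate(melody)
    print(collision_event, melody)
```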
We present a new multimodal dataset called Visual Recipe Flow, which enables us to learn the result of each cooking action. The dataset consists of object state changes and the workflow of the recipe text. The state changes are represented as image pairs, while the workflow is represented as a recipe flow graph (r-FG). The image pairs are grounded in the r-FG, which provides cross-modal relations. With our dataset, one can try a range of applications, from multimodal commonsense reasoning to procedural text generation.
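A hypothetical sketch of how such elements could be represented in code is shown below: recipe flow graph nodes, arcs between them, and image pairs grounded to nodes. The field names are assumptions, not the released data format.

```python
# Hypothetical in-memory representation of a recipe flow graph with grounded image pairs.
from dataclasses import dataclass, field

@dataclass
class ActionNode:
    node_id: int
    text: str                      # e.g., "chop the onion"

@dataclass
class StateChange:
    before_image: str              # path to the image before the action
    after_image: str               # path to the image after the action

@dataclass
class RecipeFlowGraph:
    nodes: list[ActionNode] = field(default_factory=list)
    arcs: list[tuple[int, int]] = field(default_factory=list)        # (src, dst) node ids
    grounding: dict[int, StateChange] = field(default_factory=dict)  # node id -> image pair

recipe = RecipeFlowGraph(
    nodes=[ActionNode(0, "chop the onion"), ActionNode(1, "fry the onion")],
    arcs=[(0, 1)],
    grounding={0: StateChange("onion_raw.jpg", "onion_chopped.jpg")},
)
print(recipe.grounding[0].after_image)
```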
Automatic fault detection is a major challenge in many sports. In races, referees visually judge faults according to the rules, so ensuring objectivity and fairness in judging is important. To address this issue, some studies have attempted to detect faults automatically using sensors and machine learning. However, there are issues with attaching sensors and with equipment such as high-speed cameras, which conflict with the referees' visual judgment, as well as with the interpretability of fault detection models. In this study, we propose a fault detection system based on contactless measurement. We use pose estimation and machine learning models trained on the judgments of multiple qualified referees to realize fair fault judgment. Using smartphone videos of normal races, including those of medalists at the Tokyo Olympics, as well as intentionally faulty walking, validation results show that the average accuracy of the proposed system exceeds 90%. We also reveal that the machine learning model detects faults in accordance with the race walking rules. Furthermore, the medalists' intentionally faulty walking motions differ from those of university walkers. This finding informs the realization of a more general fault detection model. The code and data are available at https://github.com/szucchini/racewalk-aijudge.
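The pipeline can be sketched roughly as pose keypoints mapped to interpretable features tied to the race walking rules (knee extension, loss of contact) and fed to a standard classifier; the feature definitions and model choice below are assumptions, not the paper's exact setup.

```python
# Hedged sketch: keypoint-derived features -> classifier trained on referee labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def knee_angle(hip, knee, ankle):
    """Angle at the knee (degrees) from three 2D keypoints."""
    a, b = hip - knee, ankle - knee
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def frame_features(keypoints):
    """keypoints: dict of 2D joint positions plus the ground line's y-coordinate."""
    return [
        knee_angle(keypoints["hip"], keypoints["knee"], keypoints["ankle"]),
        keypoints["ankle"][1] - keypoints["ground_y"],   # proxy for loss of contact
    ]

# Toy training data: per-frame features, labels from qualified referees (1 = fault).
X = np.array([[178.0, 0.00], [150.0, 0.04], [175.0, 0.06], [145.0, 0.00]])
y = np.array([0, 1, 1, 1])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[160.0, 0.05]]))
```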
Robot navigation with deep reinforcement learning (RL) achieves higher performance and performs well in complex environments. Meanwhile, explaining the decisions of deep RL models has become a critical issue for the safety and reliability of more autonomous robots. In this paper, we propose a visual explanation method based on an attention branch for deep RL models. We attach an attention branch to a pre-trained deep RL model and train the branch in a supervised manner, using the outputs of the trained deep RL model as correct labels. Since the attention branch is trained to output the same results as the deep RL model, the obtained attention maps correspond to the agent's actions with higher interpretability. Experimental results on a robot navigation task show that the proposed method can generate interpretable attention maps for visual explanation.
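A minimal sketch of this setup, assuming PyTorch and illustrative architectures and sizes, attaches an attention branch to a frozen policy and trains it to reproduce the policy's action choices; the attention map is read out for visual explanation. The rl_model.features and rl_model.policy accessors are assumptions.

```python
# Sketch (PyTorch): attention branch supervised by a frozen deep RL policy's actions.
import torch
import torch.nn as nn

class AttentionBranch(nn.Module):
    def __init__(self, in_channels, num_actions):
        super().__init__()
        self.attn = nn.Sequential(nn.Conv2d(in_channels, 1, 1), nn.Sigmoid())
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(in_channels, num_actions))

    def forward(self, feat):
        a = self.attn(feat)               # spatial attention map used for explanation
        return self.head(feat * a), a

def train_step(rl_model, branch, optimizer, obs):
    with torch.no_grad():                 # the frozen RL model provides "correct labels"
        feat = rl_model.features(obs)     # assumed feature extractor of the RL model
        target = rl_model.policy(feat).argmax(dim=1)
    logits, attn_map = branch(feat)
    loss = nn.functional.cross_entropy(logits, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item(), attn_map
```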